Approximation bounds for smooth functions in C(Rd) by neural and mixture networks

Authors

  • Vitaly Maiorov
  • Ron Meir
Abstract

We consider the approximation of smooth multivariate functions in C(IRd) by feedforward neural networks with a single hidden layer of nonlinear ridge functions. Under certain assumptions on the smoothness of the functions being approximated and on the activation functions in the neural network, we present upper bounds on the degree of approximation achieved over the domain IRd, thereby generalizing available results for compact domains. We extend the approximation results to the so-called mixture of experts architecture, which has received considerable attention in recent years, showing that the same type of approximation bound may be achieved.
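
To make the two architectures concrete, here is a minimal sketch, in generic notation, of a single-hidden-layer ridge-function network r_n(x) = sum_i c_i σ(⟨a_i, x⟩ + b_i) and a mixture-of-experts model with softmax gating. The choice σ = tanh, the linear experts, and the random parameters are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

def ridge_network(x, A, b, c, sigma=np.tanh):
    """One-hidden-layer ridge network: r_n(x) = sum_i c[i] * sigma(<A[i], x> + b[i]).
    x: (d,) input; A: (n, d) ridge directions; b, c: (n,) biases and outer weights."""
    return c @ sigma(A @ x + b)

def mixture_of_experts(x, W_gate, experts):
    """Mixture of experts: f(x) = sum_i g_i(x) * h_i(x), with a softmax gate
    g(x) = softmax(W_gate @ x) weighting the expert predictions h_i."""
    z = W_gate @ x
    g = np.exp(z - z.max())
    g /= g.sum()
    return sum(gi * h(x) for gi, h in zip(g, experts))

# Illustrative evaluation with random parameters (d = 3 inputs, n = 5 units/experts).
rng = np.random.default_rng(0)
d, n = 3, 5
x = rng.standard_normal(d)
print(ridge_network(x, rng.standard_normal((n, d)),
                    rng.standard_normal(n), rng.standard_normal(n)))
experts = [lambda x, w=rng.standard_normal(d): w @ x for _ in range(n)]
print(mixture_of_experts(x, rng.standard_normal((n, d)), experts))
```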


Similar articles


Novel Radial Basis Function Neural Networks based on Probabilistic Evolutionary and Gaussian Mixture Model for Satellites Optimum Selection

In this study, two novel learning algorithms are applied to the Radial Basis Function Neural Network (RBFNN) to approximate highly nonlinear functions. The Probabilistic Evolutionary (PE) and Gaussian Mixture Model (GMM) techniques are proposed to significantly reduce the error functions. The main idea concerns the various strategies for optimizing the procedure of Gradient ...
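
For reference, the underlying RBFNN form is simple to state; the sketch below evaluates a Gaussian RBF network and fits its outer weights by plain least squares on assumed sample data. This is a generic baseline, not the PE/GMM optimization proposed in the paper.

```python
import numpy as np

def rbf_network(x, centers, widths, weights):
    """Gaussian RBF network: f(x) = sum_i w[i] * exp(-||x - c_i||^2 / (2 s_i^2))."""
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    return weights @ np.exp(-sq_dist / (2.0 * widths ** 2))

# Fit only the outer weights by least squares (the paper's PE/GMM tuning of
# centers and widths is not reproduced here).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])                 # nonlinear target
centers = rng.uniform(-1, 1, size=(20, 2))
widths = np.full(20, 0.5)
Phi = np.exp(-((X[:, None, :] - centers) ** 2).sum(-1) / (2 * widths ** 2))
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(rbf_network(X[0], centers, widths, weights), y[0])      # fitted vs. true value
```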


Error bounds for approximations with deep ReLU networks

We study the expressive power of shallow and deep neural networks with piecewise linear activation functions. We establish new rigorous upper and lower bounds on the network complexity in the setting of approximation in Sobolev spaces. In particular, we prove that deep ReLU networks approximate smooth functions more efficiently than shallow networks. In the case of approximations of 1D Lipschitz...
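
The 1D case has a concrete form worth keeping in mind: a piecewise-linear interpolant with n breakpoints is exactly a one-hidden-layer ReLU network with n units. The sketch below builds that network for an assumed target sin(x); the deep constructions behind the paper's rates are not reproduced.

```python
import numpy as np

def relu_interpolant(knots, values):
    """One-hidden-layer ReLU network matching the piecewise-linear interpolant
    of (knots, values): f(x) = v_0 + sum_i a_i * relu(x - t_i), where a_i is
    the change in slope at knot t_i."""
    slopes = np.diff(values) / np.diff(knots)
    a = np.diff(slopes, prepend=0.0)          # slope increments (a[0] = initial slope)
    return lambda x: values[0] + a @ np.maximum(0.0, x - knots[:-1])

knots = np.linspace(0.0, np.pi, 9)
f = relu_interpolant(knots, np.sin(knots))
xs = np.linspace(0.0, np.pi, 101)
print(max(abs(f(x) - np.sin(x)) for x in xs))  # uniform error of the shallow net
```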


Optimal Approximation with Sparsely Connected Deep Neural Networks

We derive fundamental lower bounds on the connectivity and the memory requirements of deep neural networks guaranteeing uniform approximation rates for arbitrary function classes in L2(Rd). In other words, we establish a connection between the complexity of a function class and the complexity of deep neural networks approximating functions from this class to within a prescribed accuracy. Additio...
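
As a point of terminology, "connectivity" in such bounds counts nonzero weights. A minimal sketch, assuming a network stored as a list of weight matrices, of the quantity the lower bounds constrain:

```python
import numpy as np

def connectivity(weight_matrices):
    """Connectivity of a sparsely connected network: the total number of
    nonzero weights (edges) across all layers."""
    return sum(int(np.count_nonzero(W)) for W in weight_matrices)

# A 3-layer network where roughly 90% of the entries have been pruned to zero.
rng = np.random.default_rng(3)
layers = [rng.standard_normal(s) * (rng.random(s) < 0.1)
          for s in [(64, 2), (64, 64), (1, 64)]]
print(connectivity(layers))   # nonzero-weight count, the measure the bounds constrain
```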


On the near optimality of the stochastic approximation of smooth functions by neural networks

We consider the problem of approximating the Sobolev class of functions by neural networks with a single hidden layer, establishing both upper and lower bounds. The upper bound uses a probabilistic approach, based on the Radon and wavelet transforms, and yields similar rates to those derived recently under more restrictive conditions on the activation function. Moreover, the construction using ...
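
In spirit, such probabilistic upper bounds follow a Maurey-type sampling argument: write the target as an expectation over ridge functions and replace it with an empirical average of n random draws. Below is a toy sketch of that idea, using assumed random cosine ridge features with outer weights fitted by least squares; the Radon/wavelet machinery of the actual proof is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 2, 50
X = rng.uniform(-1, 1, size=(400, d))
y = np.exp(-np.sum(X ** 2, axis=1))           # smooth target to approximate

# Sample ridge directions and phases at random; only the outer weights are fitted.
A = rng.standard_normal((n, d))
b = rng.uniform(0.0, 2.0 * np.pi, n)
Phi = np.cos(X @ A.T + b)                     # n random ridge features
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.max(np.abs(Phi @ c - y)))            # sup error over the sample
```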



Journal:
  • IEEE Transactions on Neural Networks

Volume: 9, Issue: 5

Pages: -

Publication date: 1998